6 research outputs found

    Electric Vehicles Plug-In Duration Forecasting Using Machine Learning for Battery Optimization

    The aging of rechargeable batteries, with its associated replacement costs, is one of the main issues limiting the diffusion of electric vehicles (EVs) as the future transportation infrastructure. An effective way to mitigate battery aging is to act on its charge cycles, which are more controllable than discharge ones, implementing so-called battery-aware charging protocols. Since one of the main factors affecting battery aging is its average state of charge (SOC), these protocols try to minimize the standby time, i.e., the time interval between the end of the actual charge and the moment when the EV is unplugged from the charging station. Doing so while still ensuring that the EV is fully charged when needed (in order to achieve a satisfying user experience) requires a “just-in-time” charging protocol, which completes exactly at the plug-out time. This type of protocol can only be achieved if an estimate of the expected plug-in duration is available. While many previous works have stressed the importance of having this estimate, they have either used straightforward forecasting methods or assumed that the plug-in duration was directly indicated by the user, which could lead to sub-optimal results. In this paper, we evaluate the effectiveness of a more advanced forecasting approach based on machine learning (ML). With experiments on a public dataset containing data from domestic EV charge points, we show that a simple tree-based ML model, trained on each charge station based on its users’ behaviour, can reduce the forecasting error by up to 4× compared to the simple predictors used in previous works. This, in turn, leads to an improvement of up to 50% in a combined aging/quality-of-service metric.
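    The per-station idea (train a small tree-based regressor on each charge point's past sessions, and compare it against a naive baseline) can be sketched as follows. The features (plug-in hour, day of week), the synthetic session data, and the mean-duration baseline are illustrative assumptions, not the paper's actual dataset or feature set:

```python
# Hypothetical sketch: per-station plug-in duration forecasting with a
# tree-based model vs. a naive mean predictor. Synthetic data only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic sessions for one charge point: plug-in hour and day of week.
n = 500
hour = rng.integers(0, 24, n)
dow = rng.integers(0, 7, n)
# Assumed user behaviour: evening plug-ins stay overnight (long),
# daytime plug-ins are short top-ups.
duration_h = np.where(hour >= 18, 12.0, 3.0) + rng.normal(0, 0.5, n)

X = np.column_stack([hour, dow])
X_train, X_test = X[:400], X[400:]
y_train, y_test = duration_h[:400], duration_h[400:]

# Tree-based model trained on this station's history.
tree = DecisionTreeRegressor(max_depth=4).fit(X_train, y_train)
tree_mae = mean_absolute_error(y_test, tree.predict(X_test))

# Naive baseline: always predict the station's historical mean duration.
naive_mae = mean_absolute_error(y_test, np.full(len(y_test), y_train.mean()))

print(f"tree MAE:  {tree_mae:.2f} h")
print(f"naive MAE: {naive_mae:.2f} h")
```

    On data with a behaviour-dependent pattern like this, the tree splits on the plug-in hour and beats the station-wide mean, which is the intuition behind the reported error reduction.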

    Digital Twins for Electric Vehicle SoX Battery Modeling: Status and Proposed Advancements

    The State of X (SoX) variables, where X stands for Charge, Health, and Energy, are important in battery systems, as they serve as inputs for many algorithms responsible for monitoring, controlling, and protecting the battery pack. SoX monitoring and estimation is even more critical in electric vehicles, whose batteries are essential to their operation and are, at the same time, subject to aging and performance deterioration over time. For this reason, many solutions have been proposed for SoX monitoring, falling under the umbrella of Battery Digital Twins. This work reviews the current status and challenges and proposes the structure of a battery digital twin designed to accurately reflect battery SoX at run time. To ensure a high degree of correctness concerning non-linear phenomena, the digital twin relies on data-driven models trained on traces of battery evolution over time, retrained periodically to reflect the impact of aging. The proposed digital twin structure is exemplified on two public datasets to motivate its adoption and prove its effectiveness.

    A Machine Learning-based Digital Twin for Electric Vehicle Battery Modeling

    The widespread adoption of EVs is limited by their reliance on batteries, which currently have low energy and power densities compared to liquid fuels and are subject to aging and performance deterioration over time. For this reason, monitoring the battery state of charge and state of health during the EV lifetime is a very relevant problem. This work proposes the structure of a battery digital twin designed to accurately reflect battery dynamics at run time. To ensure a high degree of correctness concerning non-linear phenomena, the digital twin relies on data-driven models trained on traces of battery evolution over time: a state of health model, repeatedly executed to estimate the degradation of maximum battery capacity, and a state of charge model, retrained periodically to reflect the impact of aging. The proposed digital twin structure is exemplified on a public dataset to motivate its adoption and prove its effectiveness, with a high degree of accuracy and inference and retraining times compatible with onboard execution.
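    The two-model structure (a state of health model that updates the usable capacity, and a state of charge estimator that operates on the aged capacity) can be illustrated with a minimal sketch. The linear fade rate, the coulomb-counting SOC update, and all numbers are assumptions for illustration, not the paper's data-driven models:

```python
# Minimal sketch of the two-model digital-twin idea: a SOH model
# updates the maximum capacity, and the SOC estimate uses that aged
# capacity. Fade rate and parameters are assumed, not the paper's.
class BatteryDigitalTwin:
    def __init__(self, nominal_capacity_ah: float):
        self.nominal_capacity_ah = nominal_capacity_ah
        self.capacity_ah = nominal_capacity_ah  # updated by the SOH model
        self.soc = 1.0

    def update_soh(self, equivalent_full_cycles: float) -> float:
        """Toy SOH model: linear capacity fade of 0.02% per full cycle."""
        soh = max(0.0, 1.0 - 2e-4 * equivalent_full_cycles)
        self.capacity_ah = self.nominal_capacity_ah * soh
        return soh

    def update_soc(self, current_a: float, dt_h: float) -> float:
        """Coulomb-counting SOC step that uses the aged capacity."""
        self.soc -= current_a * dt_h / self.capacity_ah
        self.soc = min(1.0, max(0.0, self.soc))
        return self.soc

twin = BatteryDigitalTwin(nominal_capacity_ah=50.0)
twin.update_soh(equivalent_full_cycles=500)       # aged battery: SOH = 0.9
soc = twin.update_soc(current_a=25.0, dt_h=1.0)   # 25 Ah drawn in one hour
print(f"SOC after discharge: {soc:.3f}")
```

    The point of the coupling is visible in the last two calls: the same 25 Ah discharge depletes the aged 45 Ah pack faster than it would a fresh 50 Ah one, which is why the SOC model must be kept consistent with the SOH estimate.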

    Model-Driven Dataset Generation for Data-Driven Battery SOH Models

    Estimating the State of Health (SOH) of batteries is crucial for ensuring the reliable operation of battery systems. Since there is no practical way to measure it instantaneously at run time, a model is required for its estimation. Recently, several data-driven SOH models have been proposed, whose accuracy heavily relies on the quality of the datasets used for their training. Since these datasets are obtained from measurements, they are limited in the variety of their charge/discharge profiles. To address this scarcity issue, we propose generating datasets by simulating a traditional battery model (e.g., a circuit-equivalent one). The primary advantage of this approach is the ability to use a simulatable battery model to evaluate a potentially infinite number of workload profiles for training the data-driven model. Furthermore, this general concept can be applied using any simulatable battery model, providing a fine spectrum of accuracy/complexity tradeoffs. Our results indicate that using simulated data achieves reasonable accuracy in SOH estimation, with a 7.2% error relative to the simulated model, in exchange for a 27× memory reduction and a ≈2000× speedup.
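    The generation pipeline (simulate a traditional battery model under many workload profiles, then train a data-driven SOH model on the resulting labeled traces) can be sketched as follows. The first-order equivalent-circuit model, the linear OCV curve, and the per-trace features are illustrative assumptions, not the paper's actual models:

```python
# Illustrative sketch of model-driven dataset generation: a simple
# equivalent-circuit model is simulated under random discharge profiles
# to produce labeled traces, which train a data-driven SOH regressor.
# All model parameters are assumed values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def simulate_discharge(capacity_ah: float, r_ohm: float = 0.05, steps: int = 60):
    """First-order equivalent-circuit model: V = OCV(SOC) - I*R, 1-min steps."""
    soc, currents, voltages = 1.0, [], []
    for _ in range(steps):
        i = rng.uniform(8.0, 12.0)                        # random load current [A]
        soc = max(0.0, soc - i * (1.0 / 60.0) / capacity_ah)
        ocv = 3.0 + 1.2 * soc                             # toy linear OCV curve
        currents.append(i)
        voltages.append(ocv - i * r_ohm)
    return np.array(currents), np.array(voltages)

# Generate a labeled dataset: per-trace features, with SOH as the label
# (known exactly because we control the simulated model).
X, y = [], []
for _ in range(300):
    soh = rng.uniform(0.7, 1.0)
    i_trace, v_trace = simulate_discharge(capacity_ah=50.0 * soh)
    X.append([v_trace.mean(), v_trace.min(), i_trace.mean()])
    y.append(soh)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:250], y[:250])
err = np.mean(np.abs(model.predict(X[250:]) - np.array(y[250:])))
print(f"mean absolute SOH error: {err:.3f}")
```

    Because the label comes from the simulator rather than from measurements, arbitrarily many profiles can be generated, which is the scarcity-mitigation argument made above; the accuracy of the resulting data-driven model is bounded by that of the simulated reference model.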

    SMARTIC: Smart Monitoring and Production Optimization for Zero-waste Semiconductor Manufacturing

    The Industry 4.0 revolution introduced decentralized, self-organizing, and self-learning systems for production control. New machine learning algorithms are becoming increasingly powerful at solving real-world problems, such as predictive maintenance and anomaly detection. However, many data-driven applications are still far from covering the many facets and the complexity of modern industries; correlations between smart monitoring, production scheduling, and anomaly detection/predictive maintenance have only been partially exploited. This paper proposes to develop new data-driven approaches for smart monitoring and production optimization, targeting semiconductor manufacturing, one of the most technologically advanced and data-intensive industrial sectors, where process quality, control, and simulation tools are critical for decreasing costs and increasing yield. The goal is to reduce defect generation at the electronic component level and its propagation to the system and system-of-systems levels by working on (1) enhanced anomaly detection, based on the human-in-the-loop concept and on advanced treatment of multiple time series and of domain adaptation, (2) smart and predictive maintenance based on both objective data traces and simulated ones, to mitigate the risk of degrading product quality, and (3) the construction of an extended manufacturing software stack that allows anomaly- and maintenance-aware policies to enhance production line scheduling and optimization.